Improving performance of batchgenerators #113
base: master
Conversation
caching, optimizing conditionals, using tuples instead of lists, doing operations inplace
Using lru_cache for caching tuple creation
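The commit above mentions caching tuple creation with `lru_cache`. A minimal sketch of the idea, with a hypothetical helper name (`as_tuple` is illustrative, not the actual function in the PR): repeated calls with the same arguments return the cached tuple instead of building a new one each time.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def as_tuple(value, length):
    """Broadcast a scalar to a tuple of the given length, caching the result."""
    if isinstance(value, tuple):
        return value
    return (value,) * length


# Repeated calls with the same arguments hit the cache and return
# the very same tuple object, skipping the construction cost.
print(as_tuple(3, 2))                      # (3, 3)
print(as_tuple(3, 2) is as_tuple(3, 2))    # True
```

This pays off when the same shape/parameter tuples are rebuilt in every batch iteration.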
*unittest2 also has errors
* also adding minor improvements to utils functions (reformatting file, using lru_cache where possible)
…erands instead of transposing the higher dimensional ones
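The commit message above is truncated, but it appears to refer to transposing the lower-dimensional operands instead of the higher-dimensional ones. A plausible illustration, assuming it concerns applying a small transform matrix to a large coordinate array: both orderings give the same result, but transposing only the small matrix keeps the large array in its original, contiguous memory layout.

```python
import numpy as np

rng = np.random.default_rng(0)
rot = rng.standard_normal((3, 3))           # small operand
coords = rng.standard_normal((3, 100_000))  # large operand

# Mathematically equivalent; operating directly on the small 3x3 matrix
# avoids working through a transposed view of the large array, whose
# non-contiguous access pattern is slower in downstream operations.
slow = (coords.T @ rot.T).T
fast = rot @ coords

assert np.allclose(slow, fast)
```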
…private method in nnUNetTrainer
pandas unique is faster because it uses hashtable
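To illustrate the claim above: `np.unique` sorts the input (O(n log n)), while `pd.unique` builds a hash table (O(n)) and returns values in first-seen order, which is why it tends to be faster on large arrays with few distinct values. A small comparison:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=1_000_000)

np_result = np.unique(labels)   # sorted, O(n log n)
pd_result = pd.unique(labels)   # hash-table based, O(n), first-seen order

# Same set of values, different ordering guarantees.
assert set(np_result) == set(pd_result)
```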
Revisited; removed the contiguous calls from the numpy code and moved them into the cast file
* + optimized augment brightness multiplicative
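A minimal sketch of what an optimized multiplicative brightness augmentation can look like, assuming a channel-first array and per-channel factors; the function name and signature here are illustrative, not the PR's exact implementation. The key points are drawing all factors in one call and scaling in place so no temporary copy of the full array is allocated.

```python
import numpy as np


def augment_brightness_multiplicative(data, mult_range=(0.75, 1.25),
                                      per_channel=True, rng=None):
    """Scale intensities in place by a random factor (vectorized sketch).

    data is assumed to be a float array with shape (channels, *spatial).
    """
    rng = np.random.default_rng() if rng is None else rng
    if per_channel:
        # One factor per channel, shaped to broadcast over the spatial axes.
        factors = rng.uniform(*mult_range,
                              size=(data.shape[0],) + (1,) * (data.ndim - 1))
    else:
        factors = rng.uniform(*mult_range)
    data *= factors  # in-place: no full-size temporary array
    return data
```

Because the scaling is in place, callers that need the original image must copy it first.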
I am starting to review all your changes. There is a lot of stuff, thanks a lot! Might take me a while to do all that. That must have been so much work, wow!
You're welcome!
I am not confident either about how much the unittests cover, which is why I would like to go through everything before approving. You have some pretty cool tricks up your sleeve in how you approach things. That's certainly a lot cleaner than the old batchgenerators implementation.
Thank you for your work; this is a nice tool for augmenting 3D images.
My changes improve the speed of various methods, reducing the CPU time spent on augmentations.
I've fully vectorized some augmentations and normalizations, and used in-place numpy operations where applicable. Please ask if you have a question regarding any change.
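As a small illustration of the in-place style described above (a sketch with an illustrative function name, not the PR's actual code): a z-score normalization that uses `-=` and `/=` so no intermediate full-size arrays are allocated.

```python
import numpy as np


def zscore_normalize_(img, eps=1e-8):
    """Z-score normalize a float array in place, avoiding temporaries."""
    mean = img.mean()
    std = img.std()
    img -= mean         # in-place subtraction
    img /= (std + eps)  # in-place division
    return img
```

The trailing underscore marks the in-place mutation; `img - mean` or `img / std` would each allocate a new array of the same size as the input.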